Conversation
🏷️ Automatic Labeling Summary
This PR has been automatically labeled based on the files changed and PR metadata.
Applied Labels: size-xs
🔍 Lighthouse Performance Audit
📥 Download full Lighthouse report Budget Compliance: Performance budgets enforced via |
…ords, tags, section updates, Unicode fixes Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
…03-20 Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
… case in regex Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
…zed labels for 14 languages Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
Fixed in c761062.
<meta property="article:section" content="The Week Ahead">
<meta property="article:tag" content="EU Council">
<meta property="article:tag" content="Parliamentary Calendar">
<meta property="article:tag" content="Committee Meetings">
<meta property="article:tag" content="Trade">
<meta property="article:tag" content="Environment">
<meta property="article:tag" content="Labour Market">
<meta property="article:tag" content="Week Ahead">
The OpenGraph/Twitter/Article metadata section values (article:section and article:tag) are in English on a Swedish page, while the in-page type badge still shows Swedish. For localized pages these metadata fields should stay localized (or remain unchanged) to avoid SEO/social preview language mismatches.
<title>Elderly Care Crisis and Landerholm Scandal Dominate Interpellations</title>
<meta name="description" content="15 interpellations target 11 ministers as Social Democrats mount coordinated pressure on elderly care failures, security scandal, and infrastructure gaps">
<meta name="keywords" content="elderly care crisis, Landerholm scandal, interpellations, Anna Tenje, Ulf Kristersson, Social Democrats, ministerial accountability, Sami land rights, offshore wind, infrastructure, Mora-Arlanda, opposition strategy, 2026 election, Swedish Parliament, Sweden">
This Swedish localized page now has English title/description/keywords. That contradicts the stated goal of keeping non-English localized metadata intact and also diverges from other Swedish news pages (which keep Swedish metadata). Please revert these fields to Swedish (or avoid updating non-English files when generating content-based metadata).
{
  "@type": "ListItem",
  "position": 3,
- "name": "Interpellationsdebatter: Regeringen ställs till sv",
+ "name": "Elderly Care Crisis and Landerholm Scandal Dominate Interpellations",
  "item": "https://riksdagsmonitor.com/news/2026-03-20-interpellation-debates-sv.html"
BreadcrumbList position=2 still points to /news/index.html on this Swedish page. Other localized pages in this PR use the localized index (e.g., /news/index_sv.html). Please update the BreadcrumbList item URL here to the localized index as well.
* Comprehensive tests for scripts/ai-analysis/pipeline.ts
*
* This module is the core AI-first analysis pipeline with 5.35% coverage.
* Tests cover:
* - aiAnalysisPipeline exported singleton (analyzeDocuments, refineAnalysis, validateCompleteness)
The header comment says pipeline.ts has 5.35% coverage, but this test suite is explicitly intended to raise coverage substantially. Update the comment to avoid becoming immediately stale/misleading (e.g., describe the pre-PR baseline vs current target).
<title>EU Council Review and Plenary Votes Headline Parliamentary Week</title>
<meta name="description" content="Riksdag schedule features EU Council reporting, committee sessions on trade, environment, and labour market, plenary votes and interpellation answers March 24-28">
<meta name="keywords" content="EU Council, parliamentary calendar, committee meetings, plenary votes, trade policy, environment, agriculture, labour market, constitution, finance, cultural affairs, interpellation answers, Riksdag schedule, Swedish Parliament, Sweden">
This Swedish localized page now has English title/description/keywords. That contradicts the stated goal of keeping non-English localized metadata intact and also diverges from other Swedish news pages (e.g., committee-reports-sv keeps Swedish metadata). Please revert these fields to Swedish (or avoid updating non-English files when generating content-based metadata).
<meta property="article:section" content="Interpellation Debates">
<meta property="article:tag" content="Elderly Care">
<meta property="article:tag" content="Security Scandal">
<meta property="article:tag" content="Social Democrats">
<meta property="article:tag" content="Ministerial Accountability">
<meta property="article:tag" content="Infrastructure">
<meta property="article:tag" content="Energy Policy">
<meta property="article:tag" content="Interpellation Debates">
The OpenGraph/Twitter/Article metadata section values (article:section and article:tag) are in English on a Swedish page. For localized pages these metadata fields should stay localized (or remain unchanged) to avoid SEO/social preview language mismatches.
Suggested change:
- <meta property="article:section" content="Interpellation Debates">
- <meta property="article:tag" content="Elderly Care">
- <meta property="article:tag" content="Security Scandal">
- <meta property="article:tag" content="Social Democrats">
- <meta property="article:tag" content="Ministerial Accountability">
- <meta property="article:tag" content="Infrastructure">
- <meta property="article:tag" content="Energy Policy">
- <meta property="article:tag" content="Interpellation Debates">
+ <meta property="article:section" content="Interpellationsdebatter">
+ <meta property="article:tag" content="Äldreomsorg">
+ <meta property="article:tag" content="Säkerhetsskandal">
+ <meta property="article:tag" content="Socialdemokraterna">
+ <meta property="article:tag" content="Ministeransvar">
+ <meta property="article:tag" content="Infrastruktur">
+ <meta property="article:tag" content="Energipolitik">
+ <meta property="article:tag" content="Interpellationsdebatter">
article_section = self.get_article_section(article_type)

print(f" Keywords: {len(en_keywords)} content-based")
print(f" Tags: {', '.join(en_tags[:5])}")
print(f" Section: {article_section}")
article_section is derived from ARTICLE_TYPES (English strings) and then applied to every language variant via update_article_metadata. This is what causes Swedish pages in this PR to end up with English article:section/twitter:data2. Consider either (a) only updating article_section for English files, or (b) making get_article_section(lang, article_type) return localized values per language.
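One way to implement option (b) is a per-language lookup with an English fallback, so Swedish pages keep Swedish `article:section` values. A minimal Python sketch; `ARTICLE_SECTIONS` and its keys are illustrative stand-ins, not the script's actual data structures:

```python
# Sketch of option (b): per-language article:section values with English
# fallback. ARTICLE_SECTIONS and its keys are illustrative only.
ARTICLE_SECTIONS = {
    "interpellation_debates": {
        "en": "Interpellation Debates",
        "sv": "Interpellationsdebatter",
    },
    "week_ahead": {
        "en": "The Week Ahead",  # no Swedish entry yet: falls back to English
    },
}

def get_article_section(lang: str, article_type: str) -> str:
    """Return a localized section label, falling back to English."""
    sections = ARTICLE_SECTIONS.get(article_type, {})
    return sections.get(lang) or sections.get("en", "")

print(get_article_section("sv", "interpellation_debates"))  # Interpellationsdebatter
print(get_article_section("sv", "week_ahead"))              # The Week Ahead (fallback)
```

With this shape, `update_article_metadata` can pass the page's language and never write an English label onto a localized page that has a translation.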
"articleSection": "Analysis",
"articleBody": "<h2>Latest Committee Reports</h2> <p class="article-lede">This batch of 10 committee reports spans 5 different committees, reflecting the breadth of legislative activity in the current parliamentary session. The thematic spread reveals the Riksdag&#039;s multi-front policy engagement and the government&#039;s legislative priorities.</p> <h2>Thematic Analysis</h2> <h3>Committee on Social Affairs</h3> <p><em&g...",
"wordCount": 2988,
- "inLanguage": "en",
+ "inLanguage": "nb",
"keywords": "committee, reports, betänkanden, parliament, committees, reports, Swedish Parliament, Riksdag, politics, Sweden",
JSON-LD inLanguage was corrected to "nb", but the page-level is still "no". Other Norwegian Bokmål articles use "nb" (e.g., 2026-03-20-opposition-motions-no.html), so this page should update to "nb" for consistent BCP-47 usage (SEO + accessibility).
"articleSection": "Analysis",
"articleBody": "<h2>Government Propositions</h2> <p class="article-lede">The government has submitted 10 new propositions, signalling its policy priorities and the pace of its legislative agenda. Each proposition must navigate committee review and chamber debate, providing insight into the coalition&#039;s strategic direction and its ability to build cross-party support.</p> <h2>Legislative Pipeline</h2> <h3>Justitiedepartementet</h3> &...",
"wordCount": 3374,
- "inLanguage": "en",
+ "inLanguage": "nb",
"keywords": "government, propositions, parliament, legislation, Swedish Parliament, Riksdag, politics, Sweden",
JSON-LD inLanguage was corrected to "nb", but the page-level is still "no". Other Norwegian Bokmål articles use "nb" (e.g., 2026-03-20-opposition-motions-no.html), so this page should update to "nb" for consistent BCP-47 usage (SEO + accessibility).
"""Process all language versions of an article.

Generates content-based title, description, keywords, tags, and
article:section from the English version, then applies to all languages.
"""
The docstring says metadata is generated from English and then applied to all languages, but the implementation returns early when english_only is enabled (the default/safe mode). Please adjust the docstring to reflect the actual behavior (apply only to English unless overwrite/translation mode is enabled).
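The behavior the comment asks the docstring to describe, default `english_only` mode skipping localized files, can be sketched as follows. The flag name, the helper, and the `-xx.html` suffix convention are assumptions for illustration, not the script's actual API:

```python
# Sketch of the documented behavior: by default (english_only=True) only
# the English file is processed; localized files are skipped unless
# overwrite/translation mode is enabled. Names are illustrative.
def has_language_suffix(path: str) -> bool:
    """True for hypothetical localized filenames like 'article-sv.html'."""
    stem = path.rsplit(".", 1)[0]
    return len(stem) > 3 and stem[-3] == "-" and stem[-2:].isalpha()

def process_article_languages(paths, english_only=True):
    """Process language versions of an article.

    Generates content-based metadata from the English version. Localized
    files are only updated when english_only is disabled.
    """
    if english_only:
        return [p for p in paths if not has_language_suffix(p)]
    return list(paths)

pages = ["2026-03-20-interpellation-debates.html",
         "2026-03-20-interpellation-debates-sv.html"]
print(process_article_languages(pages))  # only the English page
```

Keeping the docstring aligned with this early-return guard avoids exactly the confusion flagged in the review.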
|
@copilot apply changes based on the comments in this thread
… lang=nb, docstring, article_section Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
Pull request overview
Copilot reviewed 49 out of 49 changed files in this pull request and generated 3 comments.
Comments suppressed due to low confidence (1)
scripts/ai-analysis/pipeline.ts:1299
- Interpellation docs are introduced as docType 'ip', but doc-type localization (localizeDocType/DOC_TYPE_DISPLAY) does not currently define an 'ip' label. This will surface as literal "ip" in SWOT entry prefixes (when no passage) and in dashboard/mindmap type distributions. Add an 'ip' entry to the doc-type localization map so 'ip' renders as a proper localized label in all languages (and keep the pipeline using normalizedDocType).
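A sketch of the suggested fix: add an 'ip' entry to the doc-type label map so lookups never fall through to the literal code. The real `DOC_TYPE_DISPLAY`/`localizeDocType` live in TypeScript; the Python names and map contents below are assumptions:

```python
# Illustrative doc-type label map with the missing 'ip' entry added.
# Contents and shape are assumptions about the real TypeScript map.
DOC_TYPE_DISPLAY = {
    "prop": {"en": "Government bill", "sv": "Proposition"},
    "mot": {"en": "Motion", "sv": "Motion"},
    "ip": {"en": "Interpellation", "sv": "Interpellation"},  # new entry
}

def localize_doc_type(doc_type: str, lang: str) -> str:
    labels = DOC_TYPE_DISPLAY.get(doc_type)
    if labels is None:
        return doc_type  # the literal "ip" fallback the review warns about
    return labels.get(lang) or labels.get("en") or doc_type

print(localize_doc_type("ip", "sv"))  # Interpellation, not literal "ip"
```

With the entry present, SWOT prefixes and dashboard/mindmap type distributions render a proper label in every language instead of the raw code.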
const govThreats: AnalysisSwotEntry[] = [
  ...motDocs.slice(0, 2).map(d => buildEnrichedEntry(d, topic, lang, 200)),
  // Interpellations represent direct opposition pressure on government
  ...ipDocs.slice(2, 4).map(d => buildEnrichedEntry(d, topic, lang, 200)),
];
Interpellations are only added to government threats via ipDocs.slice(2, 4). With 1–2 interpellations (a plausible small set), threats will get no IP-derived entries and fall back to placeholders, which undermines the stated classification of IPs as government threats/opposition pressure. Consider allocating at least one IP to threats when any are present (or using a split that always populates both weaknesses and threats for small n).
const oppOpportunities: AnalysisSwotEntry[] = [
  // Interpellations create debate opportunities for opposition
  ...ipDocs.slice(2, 3).map(d => buildEnrichedEntry(d, topic, lang, 200)),
];
Interpellations are only added to parliament opportunities via ipDocs.slice(2, 3). With fewer than 3 IPs, this produces no opportunity entries and forces placeholders, which conflicts with the PR description that interpellations create parliamentary/opposition opportunities (debate opening). Consider ensuring at least one IP contributes to opportunities when IPs exist (without requiring 3+).
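Both comments point at the same underlying fix: a shared allocation helper with explicit guarantees for small sets. A hedged sketch (the pipeline itself is TypeScript; the Python names and bucket split here are illustrative, not the actual implementation):

```python
# Disjoint IP allocation: with any IPs present, threats is non-empty; with
# two or more, opportunities is too; no document lands in two quadrants.
def allocate_ip_docs(ip_docs):
    n = len(ip_docs)
    if n == 0:
        return {"weaknesses": [], "threats": [], "opportunities": []}
    if n == 1:
        # A lone interpellation goes to threats; other quadrants fall back
        # to placeholders rather than reusing the same document.
        return {"weaknesses": [], "threats": ip_docs[:], "opportunities": []}
    if n == 2:
        return {"weaknesses": [], "threats": ip_docs[:1],
                "opportunities": ip_docs[1:]}
    return {"weaknesses": ip_docs[: n - 2][:2],  # cap weaknesses at two
            "threats": [ip_docs[-2]],
            "opportunities": [ip_docs[-1]]}

buckets = allocate_ip_docs(["ip1", "ip2"])
print(buckets["threats"], buckets["opportunities"])  # ['ip1'] ['ip2']
```

Sharing one helper between `analyzeDocuments` and `refineAnalysis` would also keep enrichment aligned with the entries actually placed in each quadrant.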
* - aiAnalysisPipeline exported singleton (analyzeDocuments, refineAnalysis, validateCompleteness)
* - runAnalysisPipeline orchestrator (quick, standard, deep depths)
* - SWOT generation from document classification (prop, bet, mot, sfs, fpm, skr, pressm, ext)
* - Policy assessment builder (domains, narrative, confidence)
* - Watch point generation per document type
The test header comment lists the document types covered but omits newly added interpellations ('ip') and also doesn't mention the EU ('eu') normalization case that is tested later. Update the comment so it matches the actual coverage and supported doc types.
|
@copilot apply changes based on the comments in this thread
…comments Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
Applied in 81549ee.
Screenshot (UI impact): No visible UI change; this update is backend analysis/test logic only.
Pull request overview
Copilot reviewed 49 out of 49 changed files in this pull request and generated no new comments.
Comments suppressed due to low confidence (8)
scripts/ai-analysis/pipeline.ts:1
- The current `ip` allocation logic is hard to reason about and can place the same interpellation in multiple quadrants for small sets (e.g., with 2 IPs, `ipDocs[1]` appears in both government weaknesses and threats). If the intent is to keep quadrants distinct, prefer a single, shared allocation helper that returns disjoint slices (and fall back to placeholders rather than reusing the same doc in multiple quadrants), so tests and downstream UI don't double-count the same source document.

scripts/ai-analysis/pipeline.ts:1
- `refineAnalysis()` enriches interpellation threats using `ipDocs.slice(2, 4)` unconditionally, which means for 1–2 interpellations (the case explicitly handled in `analyzeDocuments`) no IP threat entries will ever be upgraded to full-text enriched entries. Align the refinement selection with the same small-set logic used in `analyzeDocuments` (ideally via a shared helper), so enrichment consistently applies to the IP entries that were actually placed into threats.

tests/ai-analysis-pipeline-coverage.test.ts:1
- This assertion is case-sensitive (`includes('proposition')`). If the Swedish localization capitalizes the word (e.g., "Propositioner"), the test will fail even though functionality is correct. Use a consistent case-normalized check (e.g., compare `wp.title.toLowerCase()`), like other tests in this file already do.

scripts/ai-analysis/pipeline.ts:1
- Requiring `sh.name.length > 2` is not language-safe and can fail for valid short labels in some locales (e.g., CJK stakeholder names can be 2 characters). A more robust assertion is "non-empty string" (and optionally "not purely whitespace") rather than enforcing a minimum length that's unrelated to correctness.

scripts/generate-content-based-titles.py:1
- The tag-removal regex is formatting-dependent (requires exactly two leading spaces and a trailing `\n`). If files use different indentation or CRLF line endings, old `article:tag` meta tags may not be removed, leading to duplicates. Consider making the pattern tolerant to whitespace/line endings (e.g., `r'\s*<meta property="article:tag" content="[^"]*">\r?\n'`) to ensure consistent cleanup across generated HTML variants.
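The tolerant pattern suggested above can be exercised directly; a quick sketch showing it handles both space/tab indentation and CRLF line endings (the script's actual cleanup pattern is an assumption):

```python
import re

# Tolerant removal of article:tag meta lines: any leading whitespace and
# an optional CR before the newline, per the review suggestion.
TAG_RE = re.compile(r'\s*<meta property="article:tag" content="[^"]*">\r?\n')

html_lf = '  <meta property="article:tag" content="Trade">\n<title>x</title>\n'
html_crlf = '\t<meta property="article:tag" content="Trade">\r\n<title>x</title>\n'

print(TAG_RE.sub("", html_lf))    # tag removed, <title> line remains
print(TAG_RE.sub("", html_crlf))  # also works with tab indent and CRLF
```

Because `\s*` absorbs any indentation and `\r?\n` matches both ending styles, regenerated HTML variants are cleaned consistently regardless of formatting.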
pipeline.ts interpellation allocation logic and test coverage comments
ip and EU normalization coverage